100 research outputs found

    DESIGN AND CHARACTERIZATION OF LOW-POWER LOW-NOISE ALL-DIGITAL SERIAL LINK FOR POINT-TO-POINT COMMUNICATION IN SOC

    The fully-digital implementation of serial links has recently emerged as a viable alternative to their classical analogue counterparts. Reducing the analogue content in favour of the digital content is attractive because it offers lower power consumption, lower sensitivity to noise, and better scalability across multiple technologies and platforms with minimal modifications. In addition, describing the circuit in a hardware description language makes it highly flexible: all design parameters can be reprogrammed in a very short time, whereas analogue designs must be re-designed at transistor level for any parameter change. This can radically reduce cost and time-to-market by saving a significant amount of development time. However, besides these considerable advantages, the fully-digital architecture poses several design challenges.

    A Framework for Designing Efficient Deep Learning-Based Genomic Basecallers

    Nanopore sequencing generates noisy electrical signals that need to be converted into a standard string of DNA nucleotide bases using a computational step called basecalling. The accuracy and speed of basecalling have critical implications for all later steps in genome analysis. Many researchers adopt complex deep learning-based models to perform basecalling without considering the compute demands of such models, which leads to slow, inefficient, and memory-hungry basecallers. Therefore, there is a need to reduce the computation and memory cost of basecalling while maintaining accuracy. Our goal is to develop a comprehensive framework for creating deep learning-based basecallers that provide high efficiency and performance. We introduce RUBICON, a framework to develop hardware-optimized basecallers. RUBICON consists of two novel machine-learning techniques that are specifically designed for basecalling. First, we introduce the first quantization-aware basecalling neural architecture search (QABAS) framework to specialize the basecalling neural network architecture for a given hardware acceleration platform while jointly exploring and finding the best bit-width precision for each neural network layer. Second, we develop SkipClip, the first technique to remove the skip connections present in modern basecallers to greatly reduce resource and storage requirements without any loss in basecalling accuracy. We demonstrate the benefits of RUBICON by developing RUBICALL, the first hardware-optimized basecaller that performs fast and accurate basecalling. Compared to the fastest state-of-the-art basecaller, RUBICALL provides a 3.96x speedup with 2.97% higher accuracy. We show that RUBICON helps researchers develop hardware-optimized basecallers that are superior to expert-designed models.
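
    To make the bit-width side of QABAS concrete, the sketch below (plain Python, not code from RUBICON or the paper) enumerates per-layer precision assignments for a hypothetical basecaller network and keeps the best-scoring assignment that fits an assumed on-chip weight budget. The layer names, parameter counts, budget, and accuracy proxy are illustrative assumptions, and the sketch covers only the bit-width search, not the joint architecture search or the hardware cost model described in the abstract.

# Toy sketch of a quantization-aware bit-width search (illustrative only).
# Layer names, parameter counts, budget, and the accuracy proxy are
# assumptions for demonstration, not values from RUBICON/QABAS.
from itertools import product
from math import log2

LAYERS = ["conv1", "conv2", "lstm", "decoder"]   # hypothetical basecaller layers
BITWIDTHS = [4, 8, 16]                           # candidate precisions per layer
PARAMS = {"conv1": 50_000, "conv2": 200_000, "lstm": 800_000, "decoder": 100_000}
BUDGET_BITS = 10_000_000                         # assumed weight-storage budget

def cost_bits(config):
    # Total weight storage (bits) for a per-layer bit-width assignment.
    return sum(PARAMS[layer] * bits for layer, bits in config.items())

def accuracy_proxy(config):
    # Stand-in for validation accuracy: more precision helps, with
    # diminishing returns; a real search would train/evaluate candidates.
    return sum(log2(bits) for bits in config.values())

best = None
for choice in product(BITWIDTHS, repeat=len(LAYERS)):
    config = dict(zip(LAYERS, choice))
    if cost_bits(config) > BUDGET_BITS:
        continue                                 # violates the hardware budget
    score = accuracy_proxy(config)
    if best is None or score > best[0]:
        best = (score, config)

print("selected per-layer bit-widths:", best[1])
print("weight storage (bits):", cost_bits(best[1]))

    In this toy setting the largest layer ends up with the lowest precision, matching the intuition that aggressive quantization pays off most where the parameter count dominates the hardware cost.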